Recent breakthroughs in semi-supervised semantic segmentation have been developed through contrastive learning. In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space. However, inaccurate pseudo-labels map the ambiguous representations of pixels to the wrong classes due to the limited cognitive ability of the model. In this paper, we define pixel-wise representations from a new perspective of probability theory and propose a Probabilistic Representation Contrastive Learning (PRCL) framework that improves representation quality by taking its probability into consideration. By modelling the mapping from pixels to representations as a probability via multivariate Gaussian distributions, we can tune the contribution of ambiguous representations to tolerate the risk of inaccurate pseudo-labels. Furthermore, we define prototypes in the form of distributions, which indicate the confidence of a class, while a point prototype cannot. Moreover, we propose to regularize the distribution variance to enhance the reliability of representations. Benefiting from these designs, high-quality feature representations can be derived in the latent space, thereby further improving the performance of semantic segmentation. We conduct extensive experiments on Pascal VOC and Cityscapes to demonstrate the superiority of PRCL. The code is available at https://github.com/Haoyu-Xie/PRCL.
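To make the probabilistic-representation idea concrete, below is a minimal PyTorch sketch of one way a per-pixel diagonal Gaussian could enter a prototype-based contrastive loss, with larger variances flattening the logits and thus down-weighting ambiguous pixels. The function names (prob_contrastive_loss) and the expected-squared-distance similarity are assumptions for illustration, not the similarity or loss actually used in PRCL.

```python
import torch
import torch.nn.functional as F

def prob_contrastive_loss(mu, logvar, proto_mu, proto_logvar, labels, temperature=0.1):
    """Contrastive loss between per-pixel Gaussians (mu, exp(logvar)) and
    class-prototype Gaussians. The similarity is the negative expected squared
    distance between two diagonal Gaussians, so high-variance (ambiguous)
    pixels produce flatter logits and contribute less sharply."""
    var = logvar.exp()                 # [N, D] per-pixel variance
    proto_var = proto_logvar.exp()     # [C, D] per-prototype variance
    # E||z - p||^2 for independent z ~ N(mu, var) and p ~ N(proto_mu, proto_var)
    d2 = ((mu.unsqueeze(1) - proto_mu.unsqueeze(0)) ** 2
          + var.unsqueeze(1) + proto_var.unsqueeze(0)).sum(-1)   # [N, C]
    logits = -d2 / temperature
    return F.cross_entropy(logits, labels)   # labels: class index per pixel

# Hypothetical usage with random tensors (N pixels, D-dim embeddings, C classes)
N, D, C = 128, 64, 21
mu, logvar = torch.randn(N, D), torch.randn(N, D)
proto_mu, proto_logvar = torch.randn(C, D), torch.randn(C, D)
labels = torch.randint(0, C, (N,))
loss = prob_contrastive_loss(mu, logvar, proto_mu, proto_logvar, labels)
```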
Prevalent approaches to semi-supervised semantic segmentation mostly adopt a unitary network model based on convolutional neural networks (CNNs) and enforce consistency of the model's predictions over small perturbations applied to the inputs or the model. However, such a learning paradigm suffers from a) the limited learning capability of CNN-based models; b) the limited capability of learning discriminative features from unlabeled data; and c) limited learning of both global and local information from the whole image. In this paper, we propose a novel semi-supervised learning approach, called Transformer-CNN Cohort (TCC), which consists of two students: one based on the Vision Transformer (ViT) and the other based on a CNN. Our method subtly incorporates multi-level consistency regularization on both the predictions and the heterogeneous feature spaces via pseudo-labeling for the unlabeled data. First, since the inputs of the ViT student are image patches, the extracted feature maps encode crucial class-wise statistics. To this end, we propose class-aware feature consistency distillation (CFCD), which first leverages the outputs of each student as pseudo-labels to generate class-aware feature (CF) maps, and then transfers knowledge between the students via these CF maps. Second, as the ViT student has more uniform representations across all layers, we propose consistency-aware cross distillation to transfer knowledge between the students' pixel-wise class predictions. We validate the TCC framework on the Cityscapes and Pascal VOC 2012 datasets, on which it significantly outperforms existing semi-supervised methods.
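As a rough illustration of the cohort idea, the sketch below shows generic cross pseudo-labeling between a ViT student and a CNN student, where each student is supervised by the other's confident hard pseudo-labels on unlabeled pixels. This is a simplified stand-in assuming a fixed confidence threshold; it does not implement the paper's CFCD or consistency-aware cross distillation.

```python
import torch
import torch.nn.functional as F

def cross_pseudo_label_loss(vit_logits, cnn_logits, conf_thresh=0.95):
    """Each student supervises the other with its own confident hard
    pseudo-labels on unlabeled pixels (a generic cross-teaching sketch).
    Logits are [B, C, H, W]; pseudo-labels are per-pixel class indices."""
    with torch.no_grad():
        vit_prob, vit_pl = F.softmax(vit_logits, dim=1).max(dim=1)
        cnn_prob, cnn_pl = F.softmax(cnn_logits, dim=1).max(dim=1)
    # ViT pseudo-labels supervise the CNN, and vice versa; low-confidence pixels are masked out
    loss_cnn = (F.cross_entropy(cnn_logits, vit_pl, reduction="none")
                * (vit_prob > conf_thresh)).mean()
    loss_vit = (F.cross_entropy(vit_logits, cnn_pl, reduction="none")
                * (cnn_prob > conf_thresh)).mean()
    return loss_cnn + loss_vit
```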
The success of deep neural networks (DNNs) in real-world applications benefits from the abundance of pre-trained models. However, backdoored pre-trained models can pose a significant trojan threat to the deployment of downstream DNNs. Existing DNN testing methods are mainly designed to find incorrect corner-case behaviors in adversarial settings, but fail to discover backdoors crafted by strong trojan attacks. Observing the behaviors of trojaned networks shows that they are not reflected by a single compromised neuron, as assumed in prior work, but are attributed to critical neural paths in the activation intensity and frequency of multiple neurons. This work formulates DNN backdoor testing and proposes a framework for it. Through differential fuzzing of critical neurons from a small number of benign examples, we identify the trojan paths, in particular the critical ones, and generate backdoor testing examples by simulating the critical neurons in the identified paths. Extensive experiments demonstrate the superiority of the proposed framework, which achieves higher detection performance than existing methods. It also better detects backdoors from stealthy blending and adaptive attacks, which existing methods fail to detect. Moreover, our experiments show that it can reveal potential backdoors of models in model zoos.
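The sketch below illustrates, under strong simplifying assumptions, how one might score candidate critical neurons in a single convolutional layer by their mean activation intensity and firing frequency over a handful of benign examples. The function name and the intensity-times-frequency heuristic are hypothetical; the actual differential fuzzing and trojan-path construction are not reproduced here.

```python
import torch

def critical_neuron_scores(model, layer, benign_loader, device="cpu"):
    """Rank the channels of one conv layer by mean activation intensity and
    firing frequency over a few benign examples -- a simplified proxy for
    selecting candidate neurons on a trojan path. Assumes `benign_loader`
    yields (input, label) pairs and `layer` outputs a [N, C, H, W] tensor."""
    acts = []
    hook = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    model.eval()
    with torch.no_grad():
        for x, _ in benign_loader:
            model(x.to(device))
    hook.remove()
    a = torch.cat(acts)                  # [N, C, H, W] activations of the layer
    a = a.flatten(2).mean(-1)            # [N, C] spatially averaged activations
    intensity = a.mean(0)                # mean activation per channel
    frequency = (a > 0).float().mean(0)  # fraction of samples that fire each channel
    return intensity * frequency         # higher score = more "critical" (heuristic)
```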
Semi-supervised learning has achieved significant progress in the medical domain because it alleviates the heavy burden of collecting abundant pixel-wise annotations for semantic segmentation tasks. Existing semi-supervised methods enhance the ability to extract features from unlabeled data using the prior knowledge obtained from the limited labeled data. However, due to the scarcity of labeled data, the features extracted by the model are limited in supervised learning, and the quality of predictions on unlabeled data cannot be guaranteed. Both issues hinder consistency training. To this end, we propose a novel uncertainty-aware scheme that enables the model to learn from valuable regions automatically. Specifically, we adopt Monte Carlo sampling as an estimation method to obtain an uncertainty map, which serves as a weight on the loss to force the model to focus on valuable regions according to the features from supervised and unsupervised learning. Meanwhile, in the backward process, we jointly optimize the unsupervised and supervised losses to accelerate the convergence of the network by enhancing the gradient flow between the different tasks. Quantitatively, we conduct extensive experiments on three challenging medical datasets. The experimental results show desirable improvements over state-of-the-art counterparts.
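A minimal sketch of the described scheme, assuming MC dropout as the Monte Carlo estimator and predictive entropy as the uncertainty measure (the abstract does not fix either choice): the uncertainty map becomes a per-pixel weight on the unsupervised loss.

```python
import torch
import torch.nn.functional as F

def mc_uncertainty_weighted_loss(model, x, pseudo_label, n_samples=8):
    """Monte Carlo sampling (dropout kept active) estimates per-pixel
    uncertainty; low-entropy (confident) regions receive higher loss weight."""
    model.train()  # keep dropout active for MC sampling (also affects BN in this toy version)
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_p = probs.mean(0)                                   # [B, C, H, W]
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(1)    # predictive entropy per pixel
    weight = torch.exp(-entropy)                             # low entropy -> high weight
    loss = F.cross_entropy(model(x), pseudo_label, reduction="none")  # [B, H, W]
    return (weight * loss).mean()
```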
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results. In this challenge, the new Large-scale Diverse Video (LDV) dataset is employed. The challenge has three tracks. Tracks 1 and 2 aim at enhancing videos compressed by HEVC at a fixed QP, while Track 3 is designed for enhancing videos compressed by x265 at a fixed bit rate. Besides, Tracks 1 and 3 target improving fidelity (PSNR), and Track 2 targets improving perceptual quality. The three tracks attracted 482 registrations in total. In the test phase, 12 teams, 8 teams, and 11 teams submitted final results for Tracks 1, 2, and 3, respectively. The proposed methods and solutions gauge the state of the art of video quality enhancement. Homepage of the challenge: https://github.com/renyang-home/ntire21_venh
Convolutional neural networks can be constructed using numerical methods for solving dynamical systems, since the forward pass of a network can be regarded as the trajectory of a dynamical system. However, existing models based on numerical solvers cannot avoid the iterations of implicit methods, which makes the models inefficient at inference time. In this paper, we reinterpret the pre-activation Residual Network (ResNet) and its variants from the dynamical-system view. We argue that the iterations of implicit Runge-Kutta methods are fused into the training of these models. Furthermore, we propose a novel approach to constructing network models based on high-order Runge-Kutta methods in order to achieve higher efficiency. Our proposed models are called Runge-Kutta Convolutional Neural Networks (RKCNNs). RKCNNs are evaluated on multiple benchmark datasets. The experimental results show that RKCNNs outperform other dynamical-system network models: they achieve higher accuracy with fewer resources. They also extend the family of network models based on numerical methods for dynamical systems.
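For intuition, a pre-activation residual block whose update follows the classical fourth-order Runge-Kutta scheme might look like the sketch below; sharing one sub-network f across the four stages is an assumption made for brevity, not necessarily how RKCNNs allocate parameters.

```python
import torch.nn as nn

class RK4Block(nn.Module):
    """Residual block following an explicit RK4 step
    x_{t+1} = x_t + h * (k1 + 2*k2 + 2*k3 + k4) / 6, with f a small
    pre-activation conv sub-network -- a generic sketch, not the RKCNN design."""
    def __init__(self, channels, h=1.0):
        super().__init__()
        self.h = h
        self.f = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        k1 = self.f(x)
        k2 = self.f(x + 0.5 * self.h * k1)
        k3 = self.f(x + 0.5 * self.h * k2)
        k4 = self.f(x + self.h * k3)
        return x + self.h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
```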
Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within large, potentially open, sets of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address problems of supervised, zero-shot, generalized zero-shot, and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
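As a toy version of the distance constraints, the sketch below implements an unweighted max-margin loss that pushes each labeled sample's embedding closer to its own class prototype (vocabulary atom) than to any other atom; the weighting scheme and the treatment of unsupervised atoms in the actual framework are omitted, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def vocab_informed_margin_loss(embeddings, labels, prototypes, margin=0.1):
    """Hinge loss enforcing d(sample, correct atom) + margin <= d(sample, other atom)
    for every other vocabulary atom -- a simplified, unweighted sketch."""
    d = torch.cdist(embeddings, prototypes)        # [N, V] distances to all atoms
    pos = d.gather(1, labels.unsqueeze(1))         # [N, 1] distance to correct prototype
    mask = F.one_hot(labels, d.size(1)).bool()     # mask out the correct atom
    viol = F.relu(margin + pos - d).masked_fill(mask, 0.0)
    return viol.mean()
```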
A noisy training set usually leads to the degradation of the generalization and robustness of neural networks. In this paper, we propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels. Specifically, we first present a Scalable Penalized Regression (SPR) method to model the linear relation between network features and one-hot labels. In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model. We theoretically show that SPR can recover clean data under some conditions. In general scenarios, the conditions may no longer be satisfied, and some noisy data are falsely selected as clean. To solve this problem, we propose a data-adaptive method for Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which provably controls the False-Selection-Rate (FSR) in the selected clean data. To improve efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel, making the framework scalable to large datasets. While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data. Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.
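A conceptual (and deliberately naive) sketch of the mean-shift idea behind SPR: fit the one-hot labels as a linear function of the features plus a per-sample shift under a row-sparsity penalty, and keep the samples whose shift is driven exactly to zero as clean. The proximal-gradient solver, the hyperparameters, and the threshold behavior are illustrative assumptions; this is not the SPR algorithm or the Knockoffs filter.

```python
import torch

def spr_select_clean(features, onehot_labels, lam=1.0, lr=0.01, steps=1000):
    """Toy mean-shift selection: solve Y ~ X @ beta + gamma with a group-lasso
    penalty on the rows of gamma via proximal gradient descent; rows of gamma
    that stay exactly zero are treated as clean samples. The step size must be
    small relative to the feature scale for the iteration to be stable."""
    n, d = features.shape
    c = onehot_labels.shape[1]
    beta = torch.zeros(d, c)
    gamma = torch.zeros(n, c)
    for _ in range(steps):
        resid = onehot_labels - features @ beta - gamma        # [n, c]
        # gradient step on the smooth part 0.5 * ||Y - X beta - gamma||_F^2
        beta = beta + lr * features.t() @ resid
        gamma_t = gamma + lr * resid
        # proximal step for lam * sum_i ||gamma_i||_2 (row-wise soft threshold)
        norms = gamma_t.norm(dim=1, keepdim=True)
        gamma = torch.clamp(1.0 - lr * lam / (norms + 1e-12), min=0.0) * gamma_t
    return gamma.norm(dim=1) == 0   # True where the mean-shift is exactly zero
```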
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose CORGI-PM, a Chinese cOrpus foR Gender bIas Probing and Mitigation, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility, maintaining low confidence in such black-box systems, a problem exacerbated by low generalization on out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the black box transparent in a quantifiable way via uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in clinical settings. Our work sheds light on safe clinical applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
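A generic evidential-segmentation sketch in the spirit of subjective logic, assuming a softplus evidence head and the standard Dirichlet construction (evidence + 1); EvidenceCap's exact formulation may differ.

```python
import torch.nn.functional as F

def evidential_outputs(logits):
    """Map segmentation logits to Dirichlet evidence: expected class
    probabilities plus a per-pixel uncertainty mass u = K / S, where S is the
    Dirichlet strength (total evidence + K) and K the number of classes."""
    evidence = F.softplus(logits)            # [B, K, H, W], non-negative evidence
    alpha = evidence + 1.0                   # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    prob = alpha / strength                  # expected per-pixel class probabilities
    uncertainty = logits.shape[1] / strength.squeeze(1)   # in (0, 1]; high = unreliable
    return prob, uncertainty
```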